
SANFlow: Semantic-Aware Normalizing Flow for Anomaly Detection and Localization

Neural Information Processing Systems

However, previous NF-based methods forcibly transform the distribution of all features into a single distribution (e.g., the unit normal distribution), even when the features can carry locally distinct semantic information and thus follow different distributions.


Almost Free: Self-concordance in Natural Exponential Families and an Application to Bandits

Neural Information Processing Systems

We study how tail properties of the base distribution of a NEF impose limits on the NEF: if the base distribution is subexponential (subgaussian), we show that the NEF is self-concordant with a stretch factor that grows inverse-quadratically (respectively, linearly).


Generator-based Graph Generation via Heat Diffusion

Stephenson, Anthony, Gallagher, Ian, Nemeth, Christopher

arXiv.org Machine Learning

Graph generative modelling has become an essential task due to the wide range of applications in chemistry, biology, social networks, and knowledge representation. In this work, we propose a novel framework for generating graphs by adapting the Generator Matching (arXiv:2410.20587) paradigm to graph-structured data. We leverage the graph Laplacian and its associated heat kernel to define a continuous-time diffusion on each graph. The Laplacian serves as the infinitesimal generator of this diffusion, and its heat kernel provides a family of conditional perturbations of the initial graph. A neural network is trained to match this generator by minimising a Bregman divergence between the true generator and a learnable surrogate. Once trained, the surrogate generator is used to simulate a time-reversed diffusion process to sample new graph structures. Our framework unifies and generalises existing diffusion-based graph generative models, injecting domain-specific inductive bias via the Laplacian, while retaining the flexibility of neural approximators. Experimental studies demonstrate that our approach captures structural properties of real and synthetic graphs effectively.
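The Laplacian-as-generator idea above can be made concrete with a tiny computation: the heat kernel $e^{-tL}$ of a small graph, evaluated via a truncated Taylor series. This is only an illustrative sketch — the example graph, diffusion time, and truncation order are choices made here, not details from the paper.

```python
def laplacian(adj):
    # combinatorial graph Laplacian L = D - A from an adjacency matrix
    n = len(adj)
    deg = [sum(row) for row in adj]
    return [[(deg[i] if i == j else 0.0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def mat_mul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def heat_kernel(lap, t, order=30):
    # exp(-t L) via the truncated series sum_k (-t L)^k / k!
    n = len(lap)
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    out = [row[:] for row in term]
    for k in range(1, order + 1):
        scaled = [[-t * x / k for x in row] for row in lap]
        term = mat_mul(term, scaled)  # now (-t L)^k / k!
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out
```

Because the Laplacian has zero row sums, each row of the heat kernel sums to one, so $e^{-tL}$ acts as a valid diffusion (perturbation) operator on node mass — the property the framework above exploits.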


Conditional Normalizing Flows for Forward and Backward Joint State and Parameter Estimation

Lagunowich, Luke S., Tong, Guoxiang Grayson, Schiavazzi, Daniele E.

arXiv.org Machine Learning

Traditional filtering algorithms for state estimation -- such as classical Kalman filtering, unscented Kalman filtering, and particle filters -- show performance degradation when applied to nonlinear systems whose uncertainty follows arbitrary non-Gaussian, and potentially multi-modal, distributions. This study reviews recent approaches to state estimation via nonlinear filtering based on conditional normalizing flows, where the conditional embedding is generated by standard MLP architectures, transformers or selective state-space models (like Mamba-SSM). In addition, we test the effectiveness of an optimal-transport-inspired kinetic loss term in mitigating overparameterization in flows consisting of a large collection of transformations. We investigate the performance of these approaches on applications relevant to autonomous driving and patient population dynamics, paying special attention to how they handle time inversion and chained predictions. Finally, we assess the performance of various conditioning strategies for an application to real-world COVID-19 joint SIR system forecasting and parameter estimation.
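A minimal sketch of the conditional-flow building block the abstract describes: a 1-D affine flow whose shift and log-scale depend on a conditioning observation. The `conditioner` below is a fixed hand-set stand-in for the learned embedding networks (MLP, transformer, or Mamba-SSM) named above; its functional form is purely illustrative.

```python
import math

def conditioner(y):
    # maps the conditioning observation y to (shift, log_scale);
    # hypothetical closed form standing in for a learned network
    return 2.0 * y, 0.5 * math.tanh(y)

def forward(z, y):
    # push a base sample z ~ N(0, 1) through the conditional affine map
    shift, log_scale = conditioner(y)
    return shift + math.exp(log_scale) * z

def log_prob(x, y):
    # exact conditional density via the change-of-variables formula:
    # log p(x|y) = log N(z; 0, 1) - log|dx/dz|, with log|dx/dz| = log_scale
    shift, log_scale = conditioner(y)
    z = (x - shift) * math.exp(-log_scale)
    base = -0.5 * (z * z + math.log(2.0 * math.pi))
    return base - log_scale
```

Stacking many such conditional layers, each with its own learned conditioner, yields the flow-based filters the study reviews; the exact `log_prob` is what makes likelihood training possible.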


Learning Optimal Flows for Non-Equilibrium Importance Sampling

Neural Information Processing Systems

Many applications in computational sciences and statistical inference require the computation of expectations with respect to complex high-dimensional distributions with unknown normalization constants, as well as the estimation of these constants. Here we develop a method to perform these calculations based on generating samples from a simple base distribution, transporting them by the flow generated by a velocity field, and performing averages along these flowlines. This non-equilibrium importance sampling (NEIS) strategy is straightforward to implement and can be used for calculations with arbitrary target distributions. On the theory side, we discuss how to tailor the velocity field to the target and establish general conditions under which the proposed estimator is a perfect estimator with zero variance. We also draw connections between NEIS and approaches based on mapping a base distribution onto a target via a transport map. On the computational side, we show how to use deep learning to represent the velocity field by a neural network and train it towards the zero variance optimum. These results are illustrated numerically on benchmark examples (with dimension up to $10$), where after training the velocity field, the variance of the NEIS estimator is reduced by up to $6$ orders of magnitude relative to that of a vanilla estimator. We also compare the performances of NEIS with those of Neal's annealed importance sampling (AIS).
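The transport-map connection and the zero-variance limit mentioned above can be seen in a deliberately trivial 1-D case. Below, the unnormalized target and the linear map are illustrative choices made here (not from the paper): when the map pushes the base exactly onto the target, every importance weight equals the normalization constant and the estimator has zero variance.

```python
import math, random

def target_unnorm(x, sigma=2.0):
    # unnormalized Gaussian target; true Z = sigma * sqrt(2*pi)
    return math.exp(-x * x / (2.0 * sigma * sigma))

def base_pdf(x):
    # standard normal base density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def estimate_Z(n=1000, sigma=2.0, seed=0):
    rng = random.Random(seed)
    T = lambda x: sigma * x  # transport map; exact for this target
    jac = sigma              # |T'(x)|
    # importance weight: target(T(x)) * |T'(x)| / base(x)
    ws = [target_unnorm(T(x), sigma) * jac / base_pdf(x)
          for x in (rng.gauss(0.0, 1.0) for _ in range(n))]
    return sum(ws) / n
```

With the exact map, each weight is identically $\sigma\sqrt{2\pi}$, so the estimate is exact for any sample size; NEIS replaces this hand-picked map with flowlines of a learned velocity field and trains toward the same zero-variance optimum.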


A General Approach to Visualizing Uncertainty in Statistical Graphics

Petek, Bernarda, Nabergoj, David, Štrumbelj, Erik

arXiv.org Artificial Intelligence

We present a general approach to visualizing uncertainty in static 2-D statistical graphics. If we treat a visualization as a function of its underlying quantities, uncertainty in those quantities induces a distribution over images. We show how to aggregate these images into a single visualization that represents the uncertainty. The approach can be viewed as a generalization of sample-based approaches that use overlay. Notably, standard representations, such as confidence intervals and bands, emerge with their usual coverage guarantees without being explicitly quantified or visualized. As a proof of concept, we implement our approach in the IID setting using resampling, provided as an open-source Python library. Because the approach operates directly on images, the user needs only to supply the data and the code for visualizing the quantities of interest without uncertainty. Through several examples, we show how both familiar and novel forms of uncertainty visualization can be created. The implementation is not only a practical validation of the underlying theory but also an immediately usable tool that can complement existing uncertainty-visualization libraries.
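The resample-render-aggregate loop described above can be sketched in a few lines. The "image" here is just a 1-D strip of pixels marking where each bootstrap sample mean lands, and the rasters are averaged pixel-wise; the resolution, axis range, and choice of statistic are illustrative, not part of the library's API.

```python
import random

def render_mean(data, width=20, lo=0.0, hi=10.0):
    # render one resample as a tiny raster: a single lit pixel at the mean
    img = [0.0] * width
    m = sum(data) / len(data)
    col = min(width - 1, max(0, int((m - lo) / (hi - lo) * width)))
    img[col] = 1.0
    return img

def aggregate(data, n_boot=500, seed=0):
    # bootstrap-resample, render each resample, and average the images;
    # brighter pixels mark values the statistic lands on more often
    rng = random.Random(seed)
    acc = [0.0] * 20
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        acc = [a + p for a, p in zip(acc, render_mean(boot))]
    return [a / n_boot for a in acc]
```

Because the averaging happens in image space, the same loop works unchanged for any plot the user can render, which is the point the abstract makes about only needing the uncertainty-free visualization code.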


Amortized Inference of Multi-Modal Posteriors using Likelihood-Weighted Normalizing Flows

Baruah, Rajneil

arXiv.org Artificial Intelligence

Across diverse domains--from complex systems and finance to high-energy physics and astrophysics--scientific inquiry often relies on deriving theoretical parameters from observational data [1]. At the core of this challenge lies the inverse problem: inferring the posterior distribution of theoretical parameters given a set of observables [2]. Traditional approaches for posterior estimation rely on sampling algorithms such as Markov Chain Monte Carlo (MCMC) [3, 4] and Nested Sampling (NS) [5]. In astrophysics and cosmology, implementations like emcee [6] and dynesty [7] have become standard tools. While these frameworks are statistically robust, they suffer significantly from the curse of dimensionality. In real-world scenarios, where the parameter space is high-dimensional and the likelihood function relies on computationally expensive simulators (e.g., in particle physics phenomenology [8]), convergence can take weeks or even months. Recent advances in machine learning have introduced Normalizing Flows (NFs) as a powerful alternative for probabilistic modelling [9, 10]. By learning a bijective mapping between a simple base distribution (e.g., a Gaussian) and the complex target distribution, NFs allow for exact density estimation and efficient sampling [11] from the target distribution. Modern architectures, such as RealNVP [12] and Neural Spline Flows [13], offer enough expressivity to model highly complex distributions.
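The bijective base-to-target mapping and exact density evaluation described above can be illustrated with a single toy affine coupling step in the RealNVP spirit: one coordinate passes through unchanged and conditions an affine transform of the other, making both the inverse and the log-determinant trivial. The `conditioner` below is a hand-set function standing in for the learned network.

```python
import math

def conditioner(x1):
    # stand-in for the learned network of a real coupling layer
    return math.sin(x1), 0.3 * math.tanh(x1)   # (shift, log-scale)

def forward(x1, x2):
    s, ls = conditioner(x1)
    return x1, s + math.exp(ls) * x2           # x1 passes through unchanged

def inverse(y1, y2):
    s, ls = conditioner(y1)                    # y1 == x1, so this is exact
    return y1, (y2 - s) * math.exp(-ls)

def log_prob(y1, y2):
    # exact density via change of variables: standard-normal base term
    # minus the log-det Jacobian, which for affine coupling is the log-scale
    x1, x2 = inverse(y1, y2)
    _, ls = conditioner(x1)
    base = -0.5 * (x1 * x1 + x2 * x2) - math.log(2.0 * math.pi)
    return base - ls
```

Stacking such layers with alternating masks is what gives architectures like RealNVP and Neural Spline Flows their expressivity while keeping density evaluation and sampling exact.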